EBM 2024: Lecture 1

Definition and Importance of Evidence-Based Medicine

Austin Meyer, MD, PhD, MS, MPH, MS

2024-08-30

Definition and Importance of Evidence-Based Medicine (EBM)

What is Evidence-Based Medicine?

  • Integration of:
    • Best research evidence
    • Clinical expertise
    • Patient values and preferences

Importance of EBM in Clinical Practice

  • Improves patient outcomes
  • Enhances quality of care
  • Promotes efficient use of resources
  • Keeps practitioners updated with latest research

The EBM Process

  1. Ask - Formulate clinical question
  2. Acquire - Gather relevant evidence
  3. Appraise - Evaluate evidence quality
  4. Apply - Integrate with patient care
  5. Assess - Evaluate outcomes

The EBM Process: Ask

Formulate a clear clinical question - PICO

The EBM Process: Acquire

  • Search for the best available evidence
  • Utilize databases like PubMed, Cochrane Library

The EBM Process: Appraise

  • Critically evaluate the evidence for quality and validity
  • Consider study design, risk of bias, and applicability

The EBM Process: Apply

  • Integrate evidence with clinical expertise
  • Consider patient preferences and circumstances

The EBM Process: Assess

  • Evaluate the outcome of the decision
  • Reflect on the process and identify areas for improvement

Levels of Evidence

Question 1

A 7-year-old boy with a history of recurrent asthma exacerbations presents to the clinic. His parents are concerned about the long-term effects of inhaled corticosteroids and have read conflicting information online. They ask whether continuing the medication is the best option for their child. As a clinician using Evidence-Based Medicine (EBM), what is the first step you should take to address their concerns?

Question 2

A pediatric resident is reviewing a recent study that suggests a new treatment for pediatric eczema. The study is a randomized controlled trial (RCT) with a large sample size and demonstrates statistically significant results. According to the principles of Evidence-Based Medicine (EBM), which of the following best describes the importance of this study’s findings?

Question 3

Which of the following best exemplifies the integration of patient values and preferences in the practice of Evidence-Based Medicine (EBM) for a pediatric patient?

Research Methods Overview

Types of Clinical Research

Experimental vs. Observational Studies

  • Experimental Studies
    • Interventions are actively manipulated.
    • Example: Randomized Controlled Trials (RCTs).
  • Observational Studies
    • No intervention; researchers observe outcomes.
    • Example: Cohort studies, Case-control studies.

Study Designs

  • Randomized Controlled Trials (RCTs)
    • Gold standard for assessing causality.
    • Random assignment to intervention or control.
  • Cohort Studies
    • Observational; follows a group over time.
    • Can be prospective or retrospective.
  • Case-Control Studies
    • Compares individuals with a condition (cases) to those without (controls).
    • Efficient for studying rare diseases.
  • Cross-Sectional Studies
    • Snapshot at a single point in time.
    • Useful for prevalence studies.
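Cohort studies report risk directly, so they support a relative risk (RR); case-control studies sample on outcome status, so they report an odds ratio (OR) instead. A minimal sketch of both measures from a generic 2×2 table, using made-up counts for illustration:

```python
#                 outcome+   outcome-
# exposed            a          b
# unexposed          c          d

def relative_risk(a, b, c, d):
    """Risk ratio from a cohort-style 2x2 table."""
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    return risk_exposed / risk_unexposed

def odds_ratio(a, b, c, d):
    """Odds ratio (the measure reported by case-control studies)."""
    return (a * d) / (b * c)

# Example: 40/200 exposed vs 20/200 unexposed develop the outcome.
print(relative_risk(40, 160, 20, 180))  # 2.0
print(odds_ratio(40, 160, 20, 180))     # 2.25
```

Note that the OR (2.25) overstates the RR (2.0) here; the two converge only when the outcome is rare.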

Randomized Controlled Trials (RCTs)

Gold Standard for Causality

Cohort Studies

Observational Study Design

Case-Control Studies

Retrospective Study Design

Cross-Sectional Studies

Snapshot in Time

Understanding study type and its analysis

  • Did the investigator assign exposures?
    • Yes → Experimental study
      • Random allocation?
        • Yes → Randomised controlled trial (analysis: regression, HR, t-test, ANOVA, chi-square, ITT)
        • No → Non-randomised controlled trial (analysis: regression, HR, t-test, ANOVA, chi-square, ITT)
    • No → Observational study
      • Comparison group?
        • Yes → Analytical study
          • Cohort (analysis: regression, HR, t-test, ANOVA, chi-square, RR)
          • Case-control (analysis: regression, esp. logistic; t-test, chi-square, McNemar's, OR)
          • Cross-sectional (analysis: regression, t-test, chi-square, OR)
        • No → Descriptive study (analysis: summary statistics)
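The classification logic above can be sketched as a small function. The parameter names and the `timing` labels are hypothetical conveniences, not standard terminology:

```python
def classify_study(assigned_exposure: bool,
                   randomized: bool = False,
                   comparison_group: bool = False,
                   timing: str = "") -> str:
    """Walk the decision tree: experimental vs observational, then subtype."""
    if assigned_exposure:
        # Investigator assigned the exposure -> experimental design.
        if randomized:
            return "Randomised controlled trial"
        return "Non-randomised controlled trial"
    if not comparison_group:
        # Observational with no comparison group -> purely descriptive.
        return "Descriptive study (summary statistics)"
    # Analytical observational designs, distinguished by timing/sampling.
    return {
        "follow-over-time": "Cohort study",
        "sample-on-outcome": "Case-control study",
        "single-timepoint": "Cross-sectional study",
    }.get(timing, "Analytical observational study")

print(classify_study(True, randomized=True))  # Randomised controlled trial
print(classify_study(False, comparison_group=True,
                     timing="follow-over-time"))  # Cohort study
```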

Critical Appraisal of Research Articles

Assessing Methodology

  • Study Design: Was the study design appropriate for the research question? (e.g., RCT, cohort, case-control)
  • Sample Size: Is the sample size sufficient to detect a meaningful difference or effect? (i.e., is there adequate statistical power?)
  • Blinding and Randomization: Were the participants and researchers adequately blinded, and was the randomization process robust?
  • Outcome Measures: Are the outcome measures valid, reliable, and clinically relevant?
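The sample-size question can be made concrete with the standard normal-approximation formula for comparing two means, n ≈ 2((z₁₋α/₂ + z₁₋β)/d)², where d is the standardized effect size (Cohen's d). A minimal sketch using only the standard library:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05,
                power: float = 0.80) -> int:
    """Approximate sample size per arm for comparing two means
    (normal approximation), given a standardized effect size (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided alpha
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_group(0.5))  # 63 per arm for a "medium" effect
print(n_per_group(0.2))  # 393 per arm for a "small" effect
```

The exact t-based answer is slightly larger, but the approximation makes the key point: halving the detectable effect size roughly quadruples the required sample.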

Assessing Bias in Research

Common Types of Bias

  • Selection Bias: Are the participants selected in a way that may skew the results?
    • Selection bias occurs when the participants included in a study are not representative of the target population, leading to results that may not be generalizable.
  • Information Bias: Is there a systematic error in how data is collected or classified?
    • Information bias occurs when there is a systematic error in the way data is collected, measured, or recorded in a study, leading to inaccuracies in the data that affect the study’s outcomes.
  • Confounding: Are there other variables that could influence the results, not accounted for?
    • Confounding occurs when an extraneous variable (a confounder) is associated with both the exposure and the outcome, leading to a spurious association between them.
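Confounding is easy to demonstrate by simulation. In this hypothetical sketch, age raises both the probability of exposure and the probability of the outcome, while the exposure itself has no true effect; the crude risk ratio is inflated, but stratifying by the confounder recovers RRs near 1:

```python
import random

random.seed(1)

n = 100_000
crude = {True: [0, 0], False: [0, 0]}       # exposed? -> [outcomes, total]
strata = {(age, exp): [0, 0]
          for age in ("old", "young") for exp in (True, False)}

for _ in range(n):
    old = random.random() < 0.5
    exposed = random.random() < (0.7 if old else 0.2)   # age drives exposure
    outcome = random.random() < (0.3 if old else 0.05)  # age drives outcome
    crude[exposed][0] += outcome
    crude[exposed][1] += 1
    key = ("old" if old else "young", exposed)
    strata[key][0] += outcome
    strata[key][1] += 1

risk = lambda cell: cell[0] / cell[1]
print("crude RR:", round(risk(crude[True]) / risk(crude[False]), 2))  # ~2
for age in ("old", "young"):
    rr = risk(strata[(age, True)]) / risk(strata[(age, False)])
    print(f"{age} stratum RR:", round(rr, 2))  # ~1 in each stratum
```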

Relevance to Clinical Practice

Assessing Relevance

  • External Validity: Can the results of the study be applied beyond the patients in the study?
  • Applicability: Can the study results be applied to your specific patient/population?
  • Patient Values and Preferences: Does the evidence align with your patients’ needs, circumstances, and preferences?

[Diagram: overlap of Applicability, External Validity, and Patient Preferences]

Question 4

A pediatric researcher is planning a study to investigate the effectiveness of a new vaccine in preventing a common childhood infection. The researcher randomly assigns participants to either receive the new vaccine or a placebo, with both the participants and the clinicians blinded to the assignment. After a follow-up period, the incidence of the infection is compared between the two groups. Of the following, the BEST description of the study design used in this research is:

Question 5

In a study assessing the relationship between breastfeeding and the development of allergies, researchers follow a group of infants from birth until 5 years of age. The researchers document the breastfeeding status of the infants and monitor them for the development of allergies during this period. Which of the following BEST describes the study design?

Question 6

When critically appraising a research article, a pediatric resident identifies that the study has a small sample size, potentially leading to low statistical power. Which of the following is the MOST appropriate concern related to low statistical power in this context?

Applications to Clinical Practice

Role of Clinical Guidelines in Pediatrics

Introduction to Clinical Guidelines

  • Definition: Clinical guidelines are systematically developed statements that assist clinicians and patients in making decisions about appropriate healthcare for specific clinical circumstances.
  • Purpose: They aim to improve the quality of care by providing evidence-based recommendations.
  • Structure: Virtually all guidelines begin with a systematic review of the literature.

Meta-Analyses (Part 1)

Overview and Key Concepts

  • What is a Meta-Analysis?
    • Meta-analysis is a statistical technique that combines the results of multiple studies to produce a single estimate of the overall effect.
  • Effect Size and Models:
    • Effect Size: A measure of the strength of the relationship between variables across studies (e.g., odds ratio, relative risk, mean difference).
    • Fixed-Effect Model: Assumes that all studies estimate the same effect size; variability is due to random error.
    • Random-Effects Model: Assumes that effect sizes vary across studies due to real differences; accounts for between-study variability.
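Under the fixed-effect model, the pooled estimate is just an inverse-variance weighted average: each study is weighted by 1/SE², so precise studies dominate. A minimal sketch with made-up study effects (e.g., log odds ratios):

```python
# Hypothetical (effect, standard error) pairs for illustration.
studies = [
    (0.40, 0.20),
    (0.25, 0.10),
    (0.55, 0.30),
]

# Inverse-variance weights: precise studies (small SE) count more.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * eff for (eff, se), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(round(pooled, 3), round(pooled_se, 3))  # 0.302 0.086
```

A random-effects model would add a between-study variance term (tau²) to each study's variance before weighting, widening the pooled confidence interval when studies genuinely disagree.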

Meta-Analyses (Part 2)

Advanced Techniques and Interpretation

  • Forest Plots:
    • Visual representation of individual study results and the overall pooled estimate.
    • Each study is represented as a line, with the size of the marker indicating the weight of the study.
  • Heterogeneity Assessment:
    • I² Statistic: Measures the proportion of variability in effect estimates due to heterogeneity rather than chance.
    • Cochran’s Q Test: Statistical test for heterogeneity; significant results indicate the presence of heterogeneity.
  • Publication Bias Assessment:
    • Funnel Plots: Scatterplot of study effect sizes against sample size to detect asymmetry, which may indicate publication bias.
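Cochran's Q and I² follow directly from the inverse-variance weights: Q sums the weighted squared deviations of each study from the pooled estimate, and I² = (Q − df)/Q expresses how much of that variability exceeds chance. A sketch with hypothetical numbers:

```python
# Hypothetical (effect, standard error) pairs for illustration.
effects = [(0.40, 0.20), (0.25, 0.10), (0.55, 0.30)]

w = [1 / se**2 for _, se in effects]
pooled = sum(wi * e for (e, se), wi in zip(effects, w)) / sum(w)

# Cochran's Q: weighted squared deviations from the pooled estimate.
Q = sum(wi * (e - pooled) ** 2 for (e, se), wi in zip(effects, w))
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100  # percent variability beyond chance

print(round(Q, 2), round(I2, 1))
```

With these toy numbers Q falls below its degrees of freedom, so I² is truncated to 0% (no detectable heterogeneity); a wider spread of study effects relative to their standard errors would push I² up.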

Interactive Funnel Plot


Interpreting Funnel Plots


Interpreting Forest Plots


Systematic Reviews (Part 1)

Overview and Process

  • What is a Systematic Review?
    • A systematic review is a structured approach to reviewing the literature, aiming to answer a specific research question by synthesizing all relevant studies.
  • Key Steps in Conducting a Systematic Review:
    • Formulate Research Question: Use PICO (Population, Intervention, Comparison, Outcome) framework to define the scope.
    • Search Strategy: Conduct comprehensive searches in multiple databases like PubMed, Cochrane, and Embase.
    • Study Selection: Apply inclusion and exclusion criteria to identify relevant studies.
    • Data Extraction: Systematically collect data from selected studies.

Systematic Reviews (Part 2)

Quality Assessment and Bias

  • Critical Appraisal of Studies:
    • Risk of Bias: Evaluate potential biases in the included studies using tools like the Cochrane Risk of Bias tool.
    • Quality Assessment: Assess the quality of evidence using frameworks like GRADE (Grading of Recommendations Assessment, Development, and Evaluation).
  • Synthesizing Evidence:
    • Narrative Synthesis: Summarize the findings from studies qualitatively.
    • Statistical Synthesis: If appropriate, use meta-analysis to quantitatively combine data.
  • Common Biases in Systematic Reviews:
    • Publication Bias: Studies with positive results are more likely to be published.
    • Selection Bias: Inconsistent inclusion criteria across studies can affect outcomes.

Systematic Reviews (Part 3)

Example of a Bias Assessment by Cochrane

Summary